import os
import xai
import logging as log
import warnings
import matplotlib.pyplot as plt
from util.commons import *
from util.ui import *
from util.model import *
from util.split import *
from util.dataset import *
from IPython.display import display, HTML
For this example we are going to use the 'Adult Census' dataset, which consists of both categorical and numerical features. In the output of the cell below, we can see the first five rows (head) of the dataset.
dataset, msg = get_dataset('census')
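If the helper does not render it already, the head of the dataframe can be displayed explicitly (the dataset wrapper exposes the dataframe as dataset.df, as used in the cells below):
display(dataset.df.head())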
There are many data visualisation techniques that can be used to analyse a dataset. In this example we will use three functions offered by the XAI module.
%matplotlib inline
plt.style.use('ggplot')
warnings.filterwarnings('ignore')
imbalanced_cols = ['gender', 'ethnicity']
xai.imbalance_plot(dataset.df, *imbalanced_cols)
xai.correlations(dataset.df, include_categorical=True, plot_type="matrix")
xai.correlations(dataset.df, include_categorical=True)
In the cell below the target variable is selected. In this example we use the column loan as the target variable; it indicates whether a person earns more than 50k per year. The features and the target are split into separate dataframes.
df_X, df_y, msg = split_feature_target(dataset.df, "loan")
df_y
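Conceptually, the split above is equivalent to the following pandas operations (an illustrative sketch only; split_feature_target may perform additional validation, hence the returned message):
# Hypothetical equivalent of split_feature_target, for illustration
features = dataset.df.drop(columns=["loan"])
target = dataset.df["loan"]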
In this step three models are trained on this dataset. All of them are trained on the raw data (without any preprocessing). In the output below we can see the classification reports for the trained models; the second model achieves the highest accuracy of ~0.84.
# Create three empty models
initial_models, msg = fill_empty_models(df_X, df_y, 3)
models = []
# Train model 1
model1 = initial_models[0]
msg = fill_model(model1, Algorithm.LOGISTIC_REGRESSION, Split(SplitTypes.IMBALANCED, None))
models.append(model1)
# Train model 2
model2 = initial_models[1]
msg = fill_model(model2, Algorithm.RANDOM_FOREST, Split(SplitTypes.IMBALANCED, None))
models.append(model2)
# Train model 3
model3 = initial_models[2]
msg = fill_model(model3, Algorithm.DECISION_TREE, Split(SplitTypes.IMBALANCED, None))
models.append(model3)
In the following steps we will use global interpretation techniques that help us answer questions such as: How does the model behave in general? Which features drive the predictions, and which are essentially useless? This information can be very valuable for understanding the model better. Most of these techniques work by investigating the conditional interactions between the target variable and the features over the complete dataset.
The importance of a feature is the increase in the prediction error of the model after permuting the feature’s values, which breaks the relationship between the feature and the true outcome. A feature is “important” if permuting it increases the model error, because in that case the model relied heavily on that feature to make correct predictions. Conversely, a feature is “unimportant” if permuting it barely changes the error, or does not change it at all.
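To make this concrete, here is a minimal, self-contained sketch of permutation importance using scikit-learn. It trains its own simple classifier rather than reusing the wrapper models above, whose internals are not assumed here:
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# One-hot encode the categorical features so sklearn (and the explainers
# below) can consume them as a purely numeric matrix
X_enc = pd.get_dummies(df_X).astype(float)
X_tr, X_te, y_tr, y_te = train_test_split(X_enc, df_y.values.ravel(), random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Permute each feature in turn and measure the drop in test accuracy
result = permutation_importance(clf, X_te, y_te, n_repeats=5, random_state=0)
for idx in result.importances_mean.argsort()[::-1][:10]:
    print(f"{X_enc.columns[idx]}: {result.importances_mean[idx]:.4f}")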
First we use ELI5, which does not permute the features but only visualises the weight of each feature. From the plots we can see, for instance, that marital-status=Married-civ-spouse plays an important role in all three models, while in Model 2 and Model 3 age carries a heavy weight.
for model in models:
    plot = generate_feature_importance_plot("ELI5", model)
    display(plot)
In this step we use the Skater module, which permutes the features to generate a feature importance plot. From the plots below we can see that different features are important for the different models, despite the fact that all models were trained on the same dataset.
%matplotlib inline
plt.rcParams['figure.figsize'] = [14, 15]
plt.style.use('ggplot')
warnings.filterwarnings('ignore')
for model in models:
    _ = generate_feature_importance_plot("SKATER", model)
In the cell below we use SHAP (SHapley Additive exPlanations). It combines feature contributions with game theory to derive SHAP values, and then computes the global feature importance by averaging the SHAP value magnitudes across the dataset.
from shap import initjs
initjs()
%matplotlib inline
plt.style.use('ggplot')
warnings.filterwarnings('ignore')
for model in models:
generate_feature_importance_plot("SHAP", model)
In the examples above, we have used three different techniques for representing the feature importance of a model. There is a slight difference in the results, due to the different approaches each module takes; the training algorithms also contribute to these differences. Two features in particular stand out: age and education-num. These two will therefore be used later in the Partial Dependence Plots.
The partial dependence plot (short PDP or PD plot) shows the marginal effect one or two features have on the predicted outcome of a machine learning model. A partial dependence plot can show whether the relationship between the target and a feature is linear, monotonic or more complex. For example, when applied to a linear regression model, partial dependence plots always show a linear relationship.
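Before turning to the dedicated modules, the idea can be illustrated with scikit-learn's built-in PDP support (a sketch reusing clf and X_te from the permutation-importance example above; requires scikit-learn >= 1.0):
from sklearn.inspection import PartialDependenceDisplay

# Marginal effect of age on the predicted probability of earning >50k
PartialDependenceDisplay.from_estimator(clf, X_te, features=["age"])
plt.show()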
PDPBox is the first module that we use for plotting partial dependence. We will generate two plots: one for a single feature (age) and one for two features (age and education-num).
for model in models:
generate_pdp_plots("PDPBox", model, "age", "None")
generate_pdp_plots("PDPBox", model, "age", "education-num")
In the two examples below we will use Skater and SHAP to generate PDPs for the features age and education-num.
for model in models:
generate_pdp_plots("SKATER", model, "age", "education-num")
for model in models:
generate_pdp_plots("SHAP", model, "age", "education-num")
In the cells above, we have used three different modules for plotting PDPs. They produce different types of plots, but the results look similar for all models. In conclusion, and unsurprisingly, older people (around 60) and more educated people have a greater chance of earning more than 50k per year.
Local interpretation focuses on the specifics of each individual instance and provides explanations that can lead to a better understanding of feature contributions in smaller groups of individuals, which are often overlooked by global interpretation techniques. We will use two modules for interpreting single instances: SHAP and LIME. Three examples are selected from the test dataset, all of which are falsely predicted by the first model.
SHAP leverages the idea of Shapley values for model feature influence scoring. The technical definition of a Shapley value is the “average marginal contribution of a feature value over all possible coalitions.” In other words, Shapley values consider all possible predictions for an instance using all possible combinations of inputs. Because of this exhaustive approach, SHAP can guarantee properties like consistency and local accuracy. LIME, on the other hand, does not offer such guarantees.
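Formally, the Shapley value of feature $i$ is its marginal contribution $v(S \cup \{i\}) - v(S)$ averaged, with combinatorial weights, over all coalitions $S$ of the remaining features:

$$\phi_i = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(|N| - |S| - 1)!}{|N|!} \bigl(v(S \cup \{i\}) - v(S)\bigr)$$

where $N$ is the set of all features and $v(S)$ is the model's prediction when only the features in $S$ are known.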
LIME (Local Interpretable Model-agnostic Explanations) builds sparse linear models around each prediction to explain how the black-box model works in that local vicinity. While treating the model as a black box, we perturb the instance we want to explain and learn a sparse linear model around it as an explanation. LIME's advantage over SHAP is that it is considerably faster.
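The helper explain_single_instance used below presumably wraps similar logic; a minimal sketch of calling LIME directly on tabular data (reusing clf, X_tr, X_te and X_enc from the earlier sketches) looks like this:
from lime.lime_tabular import LimeTabularExplainer

lime_explainer = LimeTabularExplainer(
    X_tr.values,
    feature_names=list(X_enc.columns),
    class_names=[str(c) for c in clf.classes_],
    discretize_continuous=True,
)
# Fit a sparse linear model around the first test instance
lime_exp = lime_explainer.explain_instance(X_te.values[0], clf.predict_proba, num_features=10)
lime_exp.show_in_notebook(show_table=True)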
examples = get_test_examples(models[0], ExampleType.FALSELY_CLASSIFIED, 3)
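As a sanity check, "falsely predicted" simply means test rows where the prediction disagrees with the true label; a sketch using the clf, X_te and y_te objects from the earlier examples:
import numpy as np

misclassified = np.where(clf.predict(X_te) != y_te)[0]
print(f"{len(misclassified)} misclassified test examples; first three indices: {misclassified[:3]}")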
In this first example we see that only the first model classified the example as '>50k' with high probability, due to the high capital-gain and marital-status=Married-civ-spouse.
for model in models:
    explanation = explain_single_instance(model, LocalInterpreterType.LIME, examples[0])
    explanation.show_in_notebook(show_table=True, show_all=True)
    explanation = explain_single_instance(model, LocalInterpreterType.SHAP, examples[0])
    display(explanation)
The second example is falsely classified by both Model 1 and Model 2. As we can see in the explanations, the example is misclassified due to the value (0) of capital-gain, which negatively impacts the prediction.
for model in models:
    explanation = explain_single_instance(model, LocalInterpreterType.LIME, examples[1])
    explanation.show_in_notebook(show_table=True, show_all=True)
    explanation = explain_single_instance(model, LocalInterpreterType.SHAP, examples[1])
    display(explanation)
The third example is falsely classified only by Model 1. The classifier is mostly influenced by the feature capital-loss.
for model in models:
    explanation = explain_single_instance(model, LocalInterpreterType.LIME, examples[2])
    explanation.show_in_notebook(show_table=True, show_all=True)
    explanation = explain_single_instance(model, LocalInterpreterType.SHAP, examples[2])
    display(explanation)